Intent classification is an important task in natural language understanding that aims to classify queries according to the intent, goal, or purpose implied in their content. A central difficulty in this task is the scarcity of human-labeled data, which weakens the generalization of models, especially when they encounter rare words. Pre-trained language models can improve generalization by providing richer language representations. The recently published BERT pre-trained language model has had a major impact on the natural language processing field: trained on large-scale unlabeled corpora and then fine-tuned, it achieves state-of-the-art results on various natural language processing tasks such as question answering and sentiment analysis. In this paper, we compare the BERT model with conventional machine learning models and show that BERT outperforms them in the few-shot learning setting.
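
As a concrete illustration of the fine-tuning approach described above, the following minimal sketch shows how a pre-trained BERT model could be fine-tuned for intent classification with the Hugging Face Transformers library. The example queries, intent labels, and hyperparameters are illustrative assumptions, not the setup used in this paper.

```python
# Minimal sketch of fine-tuning BERT for intent classification.
# The queries, labels, and hyperparameters below are hypothetical.
import torch
from torch.optim import AdamW
from transformers import BertTokenizerFast, BertForSequenceClassification

# Tiny few-shot-style training set: a handful of labeled queries.
queries = ["play some jazz music", "what's the weather tomorrow",
           "set an alarm for 7 am", "turn on the living room lights"]
labels = [0, 1, 2, 3]  # e.g. PlayMusic, GetWeather, SetAlarm, SmartHome

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

# Encode the queries into input IDs and attention masks.
batch = tokenizer(queries, padding=True, truncation=True,
                  return_tensors="pt")
targets = torch.tensor(labels)

# Standard fine-tuning loop: the classification head and all BERT
# layers are updated with a small learning rate.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**batch, labels=targets)
    outputs.loss.backward()
    optimizer.step()

# Inference: predict the intent of an unseen query.
model.eval()
with torch.no_grad():
    test = tokenizer(["wake me up at six"], return_tensors="pt")
    pred = model(**test).logits.argmax(dim=-1)
print(pred.item())
```

In contrast, a conventional baseline of the kind we compare against would typically represent each query with bag-of-words or TF-IDF features and train a classifier such as an SVM or logistic regression on the same small labeled set.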